
    The neural mechanisms underlying the impact of altruistic outcomes on the process of deception: from the perspectives of communicators and recipients

    Investigating the process of deception is crucial for understanding lying behavior. In this dissertation, three studies were performed to investigate: 1) the neural bases of lying and truth-telling in two different experimental paradigms and 2) the impact of altruistic outcomes (i.e., outcomes of acts that financially benefit others) on the processing of lies and truths. In Study 1, participants provided (un)truthful responses either on their own initiative in the spontaneous paradigm or by following others' instructions in the instructed paradigm. The behavioral results suggest that the free choice to make one's own decisions is a key component of the concept of "lies." At the neural level, the ventrolateral prefrontal cortex, the dorsolateral prefrontal cortex, and the inferior parietal lobe showed different activation patterns in the two paradigms. These regions might provide cognitive control over the temptation of dishonest gain, particularly in paradigms that allow individuals to make their decisions freely. In Studies 2 and 3, the outcomes of lying/truth-telling were manipulated to investigate the neural correlates of the impact of altruistic outcomes on the processing of these behaviors in both communicators and recipients. The results showed that the altruistic outcomes of moral behaviors mainly modulated neural activity in the nucleus accumbens, the amygdala, and the anterior insula. The nucleus accumbens might be sensitive to both social and monetary rewards. The amygdala might be involved in generating emotional responses to social outcomes, whereas the anterior insula might code deviations from socially or morally acceptable acts.
Taken together, the results suggest that the neural processes underlying deception in the frontoparietal network, the limbic system, the mesolimbic system, and the insular cortex are associated with the psychological processes of deception, including cognitive control, reward coding, and emotional responses. The findings extend our knowledge of the neural processes underlying lies and truth in different contexts and with different outcomes.

    CNN Based 3D Facial Expression Recognition Using Masking And Landmark Features

    Automatically recognizing facial expressions is an important component of human-machine interaction. In this paper, we first review previous studies on both 2D and 3D facial expression recognition, and then summarize the key research questions to be solved in the future. Finally, we propose a 3D facial expression recognition (FER) algorithm using convolutional neural networks (CNNs) and landmark features/masks, which is invariant to pose and illumination variations because it relies solely on 3D geometric facial models without any texture information. The proposed method has been tested on two public 3D facial expression databases: BU-4DFE and BU-3DFE. The results show that the CNN model benefits from the masking, and that combining landmark and CNN features can further improve 3D FER accuracy.
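The abstract does not detail the masking scheme, so as a rough illustration, here is a minimal NumPy sketch of landmark-based masking applied to a toy depth map before it would be fed to a CNN. The `landmark_mask` helper, the landmark coordinates, and the circular-region assumption are all hypothetical, not taken from the paper.

```python
import numpy as np

def landmark_mask(shape, landmarks, radius):
    """Binary mask keeping only patches around facial landmarks.

    Hypothetical helper: we assume circular regions of interest of the
    given radius around each (row, col) landmark coordinate.
    """
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape, dtype=bool)
    for (ly, lx) in landmarks:
        mask |= (yy - ly) ** 2 + (xx - lx) ** 2 <= radius ** 2
    return mask

# Toy 8x8 "depth map" standing in for a 3D geometric facial model.
depth = np.arange(64, dtype=float).reshape(8, 8)
mask = landmark_mask(depth.shape, landmarks=[(2, 2), (5, 6)], radius=1)
masked_input = depth * mask   # zero out non-landmark regions before the CNN
print(int(mask.sum()))        # prints 10: five retained pixels per landmark
```

Because the mask depends only on geometry, this kind of preprocessing stays consistent with the paper's texture-free, pose- and illumination-invariant setting.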

    List Decodability at Small Radii

    A'(n,d,e), the smallest ℓ for which every binary error-correcting code of length n and minimum distance d is decodable with a list of size ℓ up to radius e, is determined for all d ≥ 2e−3. As a result, A'(n,d,e) is determined for all e ≤ 4, except for 42 values of n. Comment: to appear in Designs, Codes, and Cryptography (accepted October 2010).
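A'(n,d,e) is a worst case over all codes with the given parameters, but for a single small code the list size at radius e can be checked by brute force over all centers. A minimal sketch (the repetition-code example is illustrative, not from the paper):

```python
from itertools import product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def max_list_size(code, e):
    """Largest number of codewords within Hamming radius e of any center.

    Brute force over all 2^n centers; only feasible for tiny n.
    """
    n = len(next(iter(code)))
    best = 0
    for center in product((0, 1), repeat=n):
        best = max(best, sum(1 for c in code if hamming(center, c) <= e))
    return best

# The length-3 repetition code has minimum distance d = 3.
rep3 = {(0, 0, 0), (1, 1, 1)}
print(max_list_size(rep3, e=1))  # prints 1: unique decoding up to radius 1
print(max_list_size(rep3, e=2))  # prints 2: radius-2 balls around both codewords overlap
```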

    An alternating direction and projection algorithm for structure-enforced matrix factorization

    Structure-enforced matrix factorization (SeMF) represents a large class of mathematical models appearing in various forms of principal component analysis, sparse coding, dictionary learning, and other machine learning techniques useful in many applications, including neuroscience and signal processing. In this paper, we present a unified algorithmic framework, based on the classic alternating direction method of multipliers (ADMM), for solving a wide range of SeMF problems whose constraint sets permit low-complexity projections. We propose a strategy to adaptively adjust the penalty parameters, which is key to achieving good performance with ADMM. We conduct extensive numerical experiments comparing the proposed algorithm with a number of state-of-the-art special-purpose algorithms on test problems, including dictionary learning for sparse representation and sparse nonnegative matrix factorization. The results show that our unified SeMF algorithm can solve different types of factorization problems as reliably and as efficiently as special-purpose algorithms. In particular, our SeMF algorithm can explicitly enforce various combinatorial sparsity patterns, which, to our knowledge, has not been considered in existing approaches.
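The paper's ADMM framework, with multipliers and adaptive penalty parameters, is not reproduced here. As a simplified stand-in that conveys the alternating structure, here is a projected alternating least-squares sketch, with nonnegativity as the structure constraint enforced by a low-complexity projection (the function name and parameters are hypothetical):

```python
import numpy as np

def semf_sketch(M, r, iters=200, seed=0):
    """Minimal sketch of structure-enforced factorization M ≈ X @ Y.

    This is plain projected alternating least squares, not the paper's
    full ADMM; the constraint set here is the nonnegative orthant, whose
    projection is the elementwise clip below.
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    X = rng.random((m, r))
    Y = rng.random((r, n))
    for _ in range(iters):
        # Least-squares update followed by projection onto the constraint set.
        Y = np.clip(np.linalg.lstsq(X, M, rcond=None)[0], 0, None)
        X = np.clip(np.linalg.lstsq(Y.T, M.T, rcond=None)[0].T, 0, None)
    return X, Y

# Try to recover a planted rank-2 nonnegative factorization.
rng = np.random.default_rng(1)
M = rng.random((6, 2)) @ rng.random((2, 5))
X, Y = semf_sketch(M, r=2)
print(np.linalg.norm(M - X @ Y))  # residual of the recovered factorization
```

Swapping the `np.clip` calls for other low-complexity projections (e.g., hard thresholding for sparsity) is what lets a framework of this shape cover the different SeMF constraint sets the abstract mentions.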

    Set Operation Aided Network For Action Units Detection

    As deep model-based methods contain a large number of parameters, training such models usually requires many fully AU-annotated facial images. This is a problem for the two widely used datasets, BP4D [31] and DISFA [18]: although each contains many frames, those frames were captured from a small number of subjects (41 and 27, respectively). Because each subject produces highly consistent facial muscle movements, adding more frames per subject only adds more nearby points in the feature space, so the classifier does not benefit from the extra frames. Data augmentation can alleviate the problem to a certain degree, but it cannot augment new subjects. We propose a novel Set Operation Aided Network (SO-Net) for action unit detection. Specifically, new features and the corresponding labels are generated by applying set operations in both the feature and label spaces. Each generated feature can be treated as the representation of a hypothetical image, so we implicitly obtain training examples beyond what was originally observed in the dataset. The deep model is thereby forced to learn subject-independent features and generalizes to unseen subjects. SO-Net is end-to-end trainable and can be flexibly plugged into any CNN model during training. We evaluate the proposed method on two public datasets, BP4D and DISFA. The experiments show state-of-the-art performance, demonstrating the effectiveness of the proposed method.
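The abstract does not specify which set operations are used. One plausible reading, sketched below purely for illustration, pairs elementwise min/max of two feature vectors with AND/OR of their multi-label AU annotations; the operators and the helper name are assumptions, not the paper's definitions.

```python
import numpy as np

def set_op_augment(f1, y1, f2, y2):
    """Generate hypothetical training pairs via set operations.

    Assumed scheme: elementwise min/max of two feature vectors paired
    with AND/OR of their binary multi-label AU annotations.
    """
    f_and, y_and = np.minimum(f1, f2), y1 & y2
    f_or,  y_or  = np.maximum(f1, f2), y1 | y2
    return (f_and, y_and), (f_or, y_or)

# Two real samples: 3-dim features with 3 binary AU labels each.
f1, y1 = np.array([0.9, 0.1, 0.4]), np.array([1, 0, 1])
f2, y2 = np.array([0.2, 0.8, 0.5]), np.array([1, 1, 0])
(f_and, y_and), (f_or, y_or) = set_op_augment(f1, y1, f2, y2)
print(y_and.tolist(), y_or.tolist())  # [1, 0, 0] [1, 1, 1]
```

The appeal of operating in both spaces at once is that each synthesized feature arrives with a label that is consistent by construction, so the hypothetical examples can be fed straight into training.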

    Facial Expression Recognition By De-expression Residue Learning

    A facial expression combines an expressive component with a person's neutral component. In this paper, we propose to recognize facial expressions by extracting information about the expressive component through a de-expression learning procedure called De-expression Residue Learning (DeRL). First, a generative model is trained using a conditional GAN (cGAN). This model generates the corresponding neutral face image for any input face image. We call this procedure de-expression because the expressive information is filtered out by the generative model; however, the expressive information is still recorded in the intermediate layers. Given the neutral face image, unlike previous works that use pixel-level or feature-level differences for facial expression classification, our method learns the deposition (or residue) that remains in the intermediate layers of the generative model. This residue is essential, as it contains the expressive component deposited in the generative model by any input facial expression image. Seven public facial expression databases are employed in our experiments. With two databases (BU-4DFE and BP4D-spontaneous) used for pre-training, the DeRL method has been evaluated on five databases: CK+, Oulu-CASIA, MMI, BU-3DFE, and BP4D+. The experimental results demonstrate the superior performance of the proposed method.
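The residue idea can be illustrated with a toy two-layer "generator": the hidden activation, rather than the input-output difference, is what a DeRL-style classifier would consume. The tiny network below is purely illustrative and bears no relation to the paper's actual cGAN architecture.

```python
import numpy as np

def toy_generator(x, W1, W2):
    """Stand-in for the cGAN generator: maps a face code x to a 'neutral'
    code; the hidden activation h is where the expressive residue lives."""
    h = np.tanh(W1 @ x)   # intermediate layer retained for classification
    neutral = W2 @ h      # de-expressed output
    return neutral, h

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((4, 3)), rng.standard_normal((3, 4))
x = rng.standard_normal(3)           # a 3-dim "expressive face" code
neutral, residue = toy_generator(x, W1, W2)

# A DeRL-style classifier would take `residue` (intermediate features)
# as input, not the pixel/feature difference `x - neutral` of earlier work.
print(residue.shape)  # (4,)
```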

    Identity-adaptive Facial Expression Recognition Through Expression Regeneration Using Conditional Generative Adversarial Networks

    Subject variation is a challenging issue for facial expression recognition, especially when handling unseen subjects with small-scale labeled facial expression databases. Although transfer learning has been widely used to tackle the problem, performance degrades on new data. In this paper, we present a novel approach (called IA-gen) that alleviates subject variation by regenerating expressions from any input facial image. First, we train conditional generative models to generate the six prototypic facial expressions from any given query face image while keeping the identity-related information unchanged. Generative Adversarial Networks are employed to train the conditional generative models, each of which is designed to generate one of the prototypic facial expression images. Second, a regular CNN (FER-Net) is fine-tuned for expression classification. After the corresponding prototypic facial expressions are regenerated from each facial image, we use the last FC layer of FER-Net as features for both the input image and the generated images. The input image is then classified as the prototypic expression whose regenerated image is closest to it in this feature space. Our proposed method not only alleviates the influence of inter-subject variation but is also flexible enough to integrate with any other FER CNN for person-independent facial expression recognition. Our method has been evaluated on the CK+, Oulu-CASIA, BU-3DFE, and BU-4DFE databases, and the results demonstrate its effectiveness.
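The final minimum-distance step can be sketched directly. The feature dimension, the seed, and the function name below are hypothetical; in the paper the features would come from FER-Net's last FC layer.

```python
import numpy as np

EXPRESSIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def classify_by_regeneration(query_feat, generated_feats):
    """Assign the label of the generated prototypic expression whose
    feature vector lies closest to the query's feature vector."""
    dists = [np.linalg.norm(query_feat - g) for g in generated_feats]
    return EXPRESSIONS[int(np.argmin(dists))]

# Toy features: pretend the FC-layer embedding is 4-dimensional.
rng = np.random.default_rng(2)
gen = [rng.standard_normal(4) for _ in EXPRESSIONS]
query = gen[3] + 0.01 * rng.standard_normal(4)  # near the "happiness" prototype
print(classify_by_regeneration(query, gen))
```

Because the prototypes are regenerated from the query itself, identity-related variation is shared by both sides of each distance comparison, which is the intuition behind the method's robustness to unseen subjects.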

    Maximum Distance Separable Codes for Symbol-Pair Read Channels

    We study (symbol-pair) codes for symbol-pair read channels, recently introduced by Cassuto and Blaum (2010). A Singleton-type bound on symbol-pair codes is established, and infinite families of optimal symbol-pair codes are constructed. These codes are maximum distance separable (MDS) in the sense that they meet the Singleton-type bound. In contrast to classical codes, where all known q-ary MDS codes have length O(q), we show that q-ary MDS symbol-pair codes can have length Ω(q^2). In addition, we completely determine the existence of MDS symbol-pair codes for certain parameters.
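In the Cassuto-Blaum model, each read returns a pair of adjacent symbols, and pair distance is the Hamming distance between the resulting pair-read vectors. A short sketch showing that a single symbol error corrupts two overlapping pair-reads:

```python
def pair_vector(w):
    """Symbol-pair read of a word: overlapping adjacent pairs (cyclic)."""
    n = len(w)
    return [(w[i], w[(i + 1) % n]) for i in range(n)]

def pair_distance(u, v):
    """Pair distance = Hamming distance between the pair-read vectors."""
    return sum(a != b for a, b in zip(pair_vector(u), pair_vector(v)))

u, v = "00000", "01000"
print(pair_distance(u, v))  # prints 2: one symbol error hits two pair-reads
```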

    Application of bolt joints dynamic parameters identification in machine tools based on partially measured frequency response functions

    This paper presents a method to identify the dynamic parameters of bolt joints based on partially measured frequency response functions (FRFs) and demonstrates its application in machine tools. Basic formulas are derived to identify the joint dynamic properties based on the substructuring method, and an algorithm to estimate the unmeasured FRFs is also developed. The identification avoids directly inverting the frequency response function matrix, and its validity is demonstrated by comparing the simulated and measured FRFs of free-free steel beams assembled with a bolt joint. An approach is put forward to apply the identification in machine tools by constructing assemblies of substructures and joint structures that substitute for the bolt joints while keeping the contact conditions unchanged. The identification of the bed-column bolt joint in a vertical machining center is provided to describe the application procedure and show the feasibility of the proposed approach.
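The paper's identification formulas are not reproduced here. As a minimal sketch of the substructuring idea, the snippet below couples two single-DOF substructure receptances through a joint modeled as a spring of stiffness k_joint, using the textbook scalar receptance-coupling form; all values and the flexible-joint formula are illustrative assumptions, not the paper's formulation.

```python
def coupled_receptance(Ha, Hb, k_joint):
    """Scalar receptance coupling of substructures A and B through a joint
    spring of stiffness k_joint (textbook form, used here as a sketch)."""
    return Ha - Ha * Ha / (Ha + Hb + 1.0 / k_joint)

def sdof(w, m, k, c):
    """Receptance of a 1-DOF substructure at angular frequency w."""
    return 1.0 / (k - m * w ** 2 + 1j * c * w)

w = 50.0
Ha = sdof(w, m=1.0, k=1e4, c=2.0)
Hb = sdof(w, m=2.0, k=5e4, c=3.0)

soft  = coupled_receptance(Ha, Hb, k_joint=1e3)   # flexible joint
stiff = coupled_receptance(Ha, Hb, k_joint=1e12)  # nearly rigid joint
rigid = Ha - Ha * Ha / (Ha + Hb)                  # rigid-coupling limit
print(abs(soft - rigid) / abs(rigid))             # joint flexibility matters
```

A quick sanity check on such formulas is that the flexible-joint expression recovers the rigid-coupling result as k_joint grows large; identification methods like the paper's run this logic in reverse, inferring joint parameters from measured assembled FRFs.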